Semantic communication (SemCom) and edge computing are two disruptive solutions to address emerging requirements of huge data communication, bandwidth efficiency, and low-latency data processing in the Metaverse. However, edge computing resources are often provided by computing service providers, and thus it is essential to design appealing incentive mechanisms for the provision of limited resources. Deep learning (DL)-based auction has recently been proposed as an incentive mechanism that maximizes revenue while holding important economic properties, i.e., individual rationality and incentive compatibility. Therefore, in this work, we introduce the design of the DL-based auction for computing resource allocation in the SemCom-enabled Metaverse. First, we briefly introduce the fundamentals and challenges of the Metaverse. Second, we present the preliminaries of SemCom and edge computing. Third, we review various incentive mechanisms for edge computing resource trading. Fourth, we present the design of the DL-based auction for edge resource allocation in the SemCom-enabled Metaverse. Simulation results demonstrate that the DL-based auction improves revenue while nearly satisfying the individual rationality and incentive compatibility constraints.
This paper addresses two major challenges in Terahertz (THz) channel estimation: the beam-split phenomenon, caused by frequency-independent analog beamformers, and computational complexity, due to the use of ultra-massive antenna arrays. Data-driven techniques are known to mitigate the complexity of this problem, but they usually require the transmission of datasets from the users to a central server, entailing huge communication overhead. In this work, we employ federated learning (FL), in which the users transmit only the model parameters instead of the whole dataset, for THz channel estimation to improve communication efficiency. To accurately estimate the channel despite beam-split, we propose a beamspace support alignment technique that requires no additional hardware. Compared with previous works, our method provides higher channel estimation accuracy with approximately 68 times lower communication overhead.
As the deployment of fifth-generation (5G) wireless systems gathers momentum worldwide, possible technologies for 6G are under active research discussion. In particular, the role of machine learning (ML) in 6G is expected to enhance and support emerging applications such as virtual and augmented reality, vehicular autonomy, and computer vision. This will lead to a large volume of wireless data traffic including images, videos, and speech. ML algorithms process these for classification/recognition/estimation through learning models located on cloud servers. This requires wirelessly transmitting the data from edge devices to the cloud server. Channel estimation, handled separately from the recognition step, is critical for accurate learning performance. To combine the learning for both the channel and the ML data, we introduce implicit channel learning, which performs the ML task without estimating the wireless channel. Here, the ML model is trained with channel-corrupted datasets in place of nominal data. Without channel estimation, the proposed approach exhibits approximately 60% improvement in image and speech classification tasks for various scenarios such as millimeter-wave and IEEE 802.11p vehicular channels.
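The core idea of implicit channel learning, training the model on channel-corrupted samples instead of nominal ones, can be sketched as follows. The flat-fading gain model, the SNR value, and all function names here are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt_with_channel(x, snr_db=10.0):
    """Pass a real-valued signal through a random flat-fading gain plus
    additive noise -- a toy stand-in for a wireless channel realization."""
    h = rng.normal(1.0, 0.3)  # random channel gain drawn per sample
    noise_power = np.mean(x ** 2) * 10 ** (-snr_db / 10)
    noise = rng.normal(0.0, np.sqrt(noise_power), size=x.shape)
    return h * x + noise

# Build a channel-corrupted training set from nominal data: the ML model is
# then trained on (x_corrupted, y) pairs and never estimates h explicitly.
nominal = [rng.normal(size=16) for _ in range(4)]
corrupted = [corrupt_with_channel(x) for x in nominal]
```

A model trained on such pairs learns to be robust to the channel distribution rather than relying on an explicit equalization step.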
Federated learning (FL) is capable of performing large distributed machine learning tasks across multiple edge users by periodically aggregating locally trained parameters. To address key challenges of enabling FL over a wireless fog-cloud system (e.g., non-IID data, user heterogeneity), we first propose an efficient FL algorithm based on federated averaging (called FedFog) that performs the local aggregation of gradient parameters at fog servers and the global training update at the cloud. Next, we employ FedFog in wireless fog-cloud systems by investigating a novel network-aware FL optimization problem that strikes a balance between the global loss and the completion time. An iterative algorithm is then developed to obtain a precise measurement of system performance, which helps design an efficient stopping criterion to output an appropriate number of global rounds. To mitigate the straggler effect, we propose a flexible user aggregation strategy that first trains fast users until a certain level of accuracy is reached before allowing slow users to join the global training update. Extensive numerical results using several real-world FL tasks are provided to verify the theoretical convergence of FedFog. We also show that the proposed co-design of FL and communication is necessary to substantially improve resource utilization while achieving comparable accuracy of the learning model.
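The two-level aggregation at the heart of FedFog (users aggregated at fog servers, fog models aggregated at the cloud) can be sketched with plain federated averaging. The function names and toy data below are illustrative, not the paper's implementation:

```python
import numpy as np

def average(params_list, weights):
    """Weighted average of parameter vectors (FedAvg-style aggregation)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * p for wi, p in zip(w, params_list))

def fedfog_round(user_params_per_fog, user_sizes_per_fog):
    """One global round: each fog server aggregates its own users locally,
    then the cloud aggregates the fog-level models."""
    fog_models, fog_sizes = [], []
    for params, sizes in zip(user_params_per_fog, user_sizes_per_fog):
        fog_models.append(average(params, sizes))  # local aggregation at fog
        fog_sizes.append(sum(sizes))
    return average(fog_models, fog_sizes)          # global update at cloud

# Toy example: 2 fog servers, 2 users each, with equal-sized local datasets.
users_fog1 = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
users_fog2 = [np.array([5.0, 6.0]), np.array([7.0, 8.0])]
global_model = fedfog_round([users_fog1, users_fog2], [[10, 10], [10, 10]])
# → array([4., 5.]): the dataset-size-weighted mean of all four user models
```

Weighting by local dataset size at both levels makes the hierarchical average coincide with the flat FedAvg average, while halving the traffic each user's update travels over the wireless backhaul.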
Being able to forecast the popularity of new garment designs is very important in an industry as fast-paced as fashion, both in terms of profitability and of reducing the problem of unsold inventory. Here, we attempt to address this task in order to provide informative forecasts to fashion designers within a virtual reality designer application, allowing them to fine-tune their creations based on current consumer preferences within an interactive and immersive environment. To achieve this we have to deal with the following central challenges: (1) the proposed method should not hinder the creative process and thus has to rely only on the garment's visual characteristics, (2) a new garment lacks historical data from which to extrapolate its future popularity, and (3) fashion trends in general are highly dynamic. To this end, we develop a computer vision pipeline fine-tuned on fashion imagery in order to extract relevant visual features along with the category and attributes of the garment. We propose a hierarchical label sharing (HLS) pipeline for automatically capturing hierarchical relations among fashion categories and attributes. Moreover, we propose MuQAR, a Multimodal Quasi-AutoRegressive neural network that forecasts the popularity of new garments by combining their visual and categorical features, while an autoregressive neural network models the popularity time series of the garment's category and attributes. Both the proposed HLS and MuQAR prove capable of surpassing the current state of the art on key benchmark datasets: DeepFashion for image classification and VISUELLE for new garment sales forecasting.
In this paper, we address the problem of image splicing localization with a multi-stream network architecture that processes the raw RGB image in parallel with other handcrafted forensic signals. Unlike previous methods that either use only the RGB images or stack several signals in a channel-wise manner, we propose an encoder-decoder architecture that consists of multiple encoder streams. Each stream is fed with either the tampered image or handcrafted signals and processes them separately to capture relevant information from each one independently. Finally, the extracted features from the multiple streams are fused in the bottleneck of the architecture and propagated to the decoder network that generates the output localization map. We experiment with two handcrafted algorithms, i.e., DCT and Splicebuster. Our proposed approach is benchmarked on three public forensics datasets, demonstrating competitive performance against several competing methods and achieving state-of-the-art results, e.g., 0.898 AUC on CASIA.
The sheer volume of online user-generated content has rendered content moderation technologies essential in order to protect digital platform audiences from content that may cause anxiety, worry, or concern. Despite the efforts towards developing automated solutions to tackle this problem, creating accurate models remains challenging due to the lack of adequate task-specific training data. The latter limitation stems from the fact that manually annotating such data is a highly demanding procedure that can severely affect the annotators' emotional well-being. In this paper, we propose the CM-Refinery framework that leverages large-scale multimedia datasets to automatically extend initial training datasets with hard examples that can refine content moderation models, while significantly reducing the involvement of human annotators. We apply our method on two model adaptation strategies designed with respect to the different challenges observed while collecting data, i.e., lack of (i) task-specific negative data or (ii) both positive and negative data. Additionally, we introduce a diversity criterion applied to the data collection process that further enhances the generalization performance of the refined models. The proposed method is evaluated on the Not Safe for Work (NSFW) and disturbing content detection tasks on benchmark datasets, achieving 1.32% and 1.94% accuracy improvements compared to the state of the art, respectively. Finally, it significantly reduces human involvement, as 92.54% of data are automatically annotated in the case of disturbing content, while no human intervention is required for the NSFW task.
In this paper, we introduce MINTIME, a video deepfake detection approach that captures spatial and temporal anomalies and handles instances of multiple people in the same video and variations in face sizes. Previous approaches disregard such information either by using simple a posteriori aggregation schemes, i.e., average or max operation, or by using only one identity for the inference, i.e., the largest one. On the contrary, the proposed approach builds on a Spatio-Temporal TimeSformer combined with a Convolutional Neural Network backbone to capture spatio-temporal anomalies from the face sequences of multiple identities depicted in a video. This is achieved through an Identity-aware Attention mechanism that attends to each face sequence independently based on a masking operation and facilitates video-level aggregation. In addition, two novel embeddings are employed: (i) the Temporal Coherent Positional Embedding, which encodes each face sequence's temporal information, and (ii) the Size Embedding, which encodes the size of the faces as a ratio to the video frame size. These extensions allow our system to adapt particularly well in the wild by learning how to aggregate information from multiple identities, which is usually disregarded by other methods in the literature. It achieves state-of-the-art results on the ForgeryNet dataset with an improvement of up to 14% AUC in videos containing multiple people and demonstrates ample generalization capabilities in cross-forgery and cross-dataset settings. The code is publicly available at https://github.com/davide-coccomini/MINTIME-Multi-Identity-size-iNvariant-TIMEsformer-for-Video-Deepfake-Detection.
Class imbalance poses a major challenge for machine learning, as most supervised learning models may exhibit bias towards the majority class and underperform on the minority classes. Cost-sensitive learning tackles this problem by treating the classes differently, typically through a user-defined fixed misclassification cost matrix provided as input to the learner. Such parameter tuning is a challenging task that requires domain knowledge, and moreover, wrong adjustments might lead to overall predictive performance deterioration. In this work, we propose a novel cost-sensitive approach for imbalanced data that dynamically adjusts the misclassification costs in response to the model's performance, rather than using a fixed misclassification cost matrix. Our approach, called AdaCC, is parameter-free, as it relies on the cumulative behavior of the boosting model in order to adjust the misclassification costs for the next boosting round, and comes with theoretical guarantees regarding the training error. Experiments on 27 real-world datasets from different domains demonstrate the superiority of our method over 12 state-of-the-art cost-sensitive approaches, exhibiting consistent improvements across different metrics, e.g., AUC [0.3%-28.56%], balanced accuracy [3.4%-21.4%], G-mean [4.8%-45%], and recall [7.4%-85.5%].
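The key departure from a fixed cost matrix, deriving the next round's costs from the boosting ensemble's cumulative behavior, can be sketched as below. The exact cost rule and all names here are illustrative simplifications, not the paper's AdaCC formulation:

```python
import numpy as np

def dynamic_costs(cum_preds, y, minority=1):
    """Toy version of the AdaCC idea: set the minority class's
    misclassification cost for the next boosting round from the
    ensemble's cumulative miss rate on that class, with no
    user-supplied cost matrix."""
    mask = (y == minority)
    miss_rate = np.mean(cum_preds[mask] != y[mask]) if mask.any() else 0.0
    # Cost grows as the ensemble keeps missing minority examples.
    return {minority: 1.0 + miss_rate, 1 - minority: 1.0}

y = np.array([0, 0, 0, 1, 1])          # imbalanced labels: minority class is 1
cum_preds = np.array([0, 0, 0, 0, 1])  # ensemble so far misses one minority sample
costs = dynamic_costs(cum_preds, y)
# → {1: 1.5, 0: 1.0}: the minority cost rises with its 50% cumulative miss rate
```

In a boosting loop these costs would scale the weight updates of misclassified minority samples each round, so the learner self-corrects instead of relying on a hand-tuned matrix.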
Inverse source problems are central to many applications in acoustics, geophysics, non-destructive testing, and beyond. Traditional imaging methods suffer from the resolution limit, preventing the distinction of sources separated by less than the emitted wavelength. In this work, we propose a method based on physics-informed neural networks to solve the source refocusing problem, constructing a novel loss term that promotes the super-resolution capabilities of the network and is grounded in the physics of wave propagation. We demonstrate the approach in the setting of imaging sources in a two-dimensional rectangular waveguide from measurements of the wavefield recorded along a vertical cross-section. The results show the ability of the method to approximate the locations of the sources with high accuracy, even when they are placed close to each other.
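The physics-informed ingredient, a loss that penalizes both data misfit on the measured cross-section and violation of the wave equation over the domain, can be sketched with a discrete Helmholtz residual. This is a schematic loss under assumed names and a finite-difference discretization, not the paper's exact formulation:

```python
import numpy as np

def helmholtz_residual(u, dx, k):
    """Discrete Helmholtz residual u_xx + u_yy + k^2 u on a 2-D grid via
    finite differences -- the physics term a PINN-style loss penalizes so
    the reconstructed field obeys the wave equation."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
           + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / dx ** 2
    return lap + k ** 2 * u

def physics_informed_loss(u_pred, u_meas, mask, dx, k, lam=1.0):
    """Data misfit on the measured cross-section (mask) plus the physics
    residual on interior grid points; lam weights the two terms."""
    data = np.mean((u_pred[mask] - u_meas[mask]) ** 2)
    phys = np.mean(helmholtz_residual(u_pred, dx, k)[1:-1, 1:-1] ** 2)
    return data + lam * phys
```

Minimizing such a loss over a network's predicted field ties the reconstruction to wave physics everywhere, not only where measurements exist, which is what enables resolving sources closer than the classical limit.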